The degree to which learned features remain invariant under large intra-class variation sets the upper limit on the performance of a Person Re-identification (ReID) model. Factors such as ambient lighting, changes in image resolution, and motion blur can introduce color deviations in pedestrian images, causing the model to overfit to the color information of the data and limiting its performance. By simulating the loss of color information in data samples and thereby emphasizing their structural information, the proposed method helps the model learn more robust features. Specifically, during training, a batch is selected at random according to a preset probability; for each RGB image in the selected batch, either a random rectangular region or the entire image is chosen, and the pixels in that region are replaced with the pixels of the same region in the corresponding grayscale image, yielding training images containing grayscale areas of varying extent. Experimental results show that, compared with the baseline model, the proposed method improves mean Average Precision (mAP) by up to 3.3 percentage points on the evaluation metric and performs well on multiple datasets.
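The random grayscale-patch replacement described above can be sketched roughly as follows. This is a minimal NumPy illustration, not the authors' implementation: the function names (`random_grayscale_patch`, `grayscale_patch_batch`), the area-ratio range, and the batch-selection probability are all assumed parameters, and the whole-image grayscale case is folded in as the upper end of the area range rather than handled separately.

```python
import numpy as np

def random_grayscale_patch(img, area_ratio=(0.1, 0.5), rng=None):
    """Replace a random rectangular region of an RGB image (H, W, 3)
    with its grayscale version; returns a new array.

    area_ratio bounds are assumed values, not from the paper."""
    rng = rng or np.random.default_rng()
    h, w, _ = img.shape
    # Sample the rectangle's area as a fraction of the image area.
    frac = rng.uniform(*area_ratio)
    rh = max(1, int(h * np.sqrt(frac)))
    rw = max(1, int(w * np.sqrt(frac)))
    top = rng.integers(0, h - rh + 1)
    left = rng.integers(0, w - rw + 1)
    out = img.copy().astype(np.float32)
    patch = out[top:top + rh, left:left + rw]
    # ITU-R BT.601 luma weights give a per-pixel grayscale value,
    # which is broadcast back across all three channels.
    gray = patch @ np.array([0.299, 0.587, 0.114], dtype=np.float32)
    out[top:top + rh, left:left + rw] = gray[..., None]
    return out.astype(img.dtype)

def grayscale_patch_batch(batch, batch_prob=0.5, rng=None):
    """With probability batch_prob (an assumed value), apply the patch
    transform to every image in a batch of shape (N, H, W, 3);
    otherwise return the batch unchanged."""
    rng = rng or np.random.default_rng()
    if rng.uniform() >= batch_prob:
        return batch
    return np.stack([random_grayscale_patch(im, rng=rng) for im in batch])
```

Inside the replaced rectangle all three channels share the same luma value, so the network can no longer rely on color cues there and is pushed toward structural features such as contours and texture.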